
Computer Engineering

   

Privacy-Preserving Algorithm for Federated Learning Against Attacks

  

Published: 2024-04-19


Abstract: Federated learning, an emerging distributed learning framework, allows multiple clients to jointly train a global model without sharing their raw data, thereby effectively protecting data privacy. However, traditional federated learning still has latent security weaknesses and is vulnerable to poisoning attacks and inference attacks. To improve both the security and the model performance of federated learning, malicious client behavior must be accurately identified and suppressed, and gradient noise must be applied so that attackers cannot recover client data by monitoring gradient information. This paper proposes a robust federated learning framework that combines a malicious-client detection mechanism with local differential privacy. Specifically, the algorithm first uses gradient similarity to identify and classify potentially malicious clients, minimizing their adverse impact on the model training task. It then designs a local differential privacy algorithm with a dynamic privacy budget that adapts to the sensitivity of different queries and to individual privacy requirements, balancing privacy protection against data quality. Experimental results on the MNIST, CIFAR-10, and MR datasets show that, compared with three baseline algorithms, the proposed approach improves accuracy by an average of 3 percentage points against sP-type clients and by 1 percentage point against other attack methods, achieving a higher security level and significantly better model performance in federated learning.
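The two mechanisms the abstract describes can be sketched roughly as follows. This is a minimal illustration with hypothetical function names, assuming cosine similarity of each client's update against the mean update for detection and the Laplace mechanism for local differential privacy; the paper's actual classification rules and dynamic privacy-budget schedule are more elaborate than this sketch.

```python
import numpy as np

def flag_malicious(client_grads, threshold=0.0):
    """Flag clients whose update disagrees with the consensus.

    Each client's gradient is compared, via cosine similarity, with
    the mean of all submitted gradients; clients whose similarity
    falls below `threshold` are flagged as suspected malicious.
    """
    grads = np.stack(client_grads)            # shape: (n_clients, dim)
    mean_grad = grads.mean(axis=0)
    sims = grads @ mean_grad / (
        np.linalg.norm(grads, axis=1) * np.linalg.norm(mean_grad) + 1e-12
    )
    return sims < threshold                   # True = suspected malicious

def add_ldp_noise(grad, epsilon, sensitivity=1.0, rng=None):
    """Laplace mechanism: noise scale grows as the budget epsilon shrinks,
    so a query given a smaller (stricter) budget receives more noise."""
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return grad + rng.laplace(0.0, scale, size=grad.shape)

# Toy round: four benign clients send similar gradients,
# one poisoning attacker flips the sign of its update.
benign = [np.ones(4) + 0.05 * i for i in range(4)]
attacker = [-np.ones(4)]
flags = flag_malicious(benign + attacker)     # only the attacker is flagged
```

In a full round, the server would drop or down-weight the flagged updates before aggregation, while each client perturbs its gradient with `add_ldp_noise` locally, using a per-client epsilon chosen from its individual privacy requirement, before uploading.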
